3 research outputs found

    Enhancing neuromorphic computing with advanced spiking neural network architectures

    Neuromorphic computing (NC) aims to revolutionize the field of artificial intelligence. It involves designing and implementing electronic systems that simulate the behavior of biological neurons using specialized hardware, such as field-programmable gate arrays (FPGAs) or dedicated neuromorphic chips [1, 2]. NC is designed to be highly efficient, optimized for low power consumption and high parallelism [3]. These systems adapt to changing environments and can learn during operation, which makes them well suited to solving dynamic and unpredictable problems [4]. However, the use of NC to solve real-life problems is currently limited because the performance of spiking neural networks (SNNs), the neural networks employed in NC, is not as high as that of traditional computing systems, such as that achieved on specialized deep learning devices, in terms of accuracy and learning speed [5, 6]. Several factors contribute to this performance gap: SNNs are harder to train because they need specialized training algorithms [7, 8]; they are more sensitive to hyperparameters, since they are dynamical systems with complex interactions [9]; they require specialized data sets (neuromorphic data) that are currently scarce and limited in size [10]; and the range of functions that SNNs can approximate is more limited compared with traditional artificial neural networks (ANNs) [11]. Before NC can have a more significant impact on AI and computing technology, these SNN-related challenges need to be addressed.

    This dissertation addresses current limitations of neuromorphic computing in order to create energy-efficient and adaptable artificial intelligence systems. It focuses on increasing the utilization of neuromorphic computing by designing novel architectures that improve the performance of spiking neural networks. Specifically, the architectures address the issues of training complexity, hyperparameter selection, computational flexibility, and scarcity of training data. The first proposed architecture uses auxiliary learning to improve training performance and data usage, while the second leverages the neuromodulation capability of spiking neurons to improve multitask classification performance. The proposed architectures are tested on Intel's Loihi2 neuromorphic computer using several neuromorphic data sets, such as NMNIST, DVS-CIFAR10, and DVS128-Gesture. The results presented in this dissertation demonstrate the potential of the proposed architectures, but also reveal some limitations that are proposed as future work.
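    As background for the SNN architectures discussed above, the sketch below shows the discrete-time leaky integrate-and-fire (LIF) dynamics that spiking neurons typically follow; the decay factor, threshold, and reset values are illustrative assumptions, not parameters taken from the dissertation.

```python
import numpy as np

def lif_step(v, spikes_in, weights, decay=0.9, v_th=1.0, v_reset=0.0):
    """One discrete-time update of a layer of leaky integrate-and-fire neurons.

    v         : membrane potentials from the previous step, shape (n_neurons,)
    spikes_in : binary input spikes, shape (n_inputs,)
    weights   : synaptic weights, shape (n_neurons, n_inputs)
    decay, v_th, v_reset : leak factor, firing threshold, reset value (assumed)
    """
    v = decay * v + weights @ spikes_in        # leak, then integrate input current
    spikes_out = (v >= v_th).astype(float)     # fire where the threshold is crossed
    v = np.where(spikes_out > 0, v_reset, v)   # reset neurons that fired
    return v, spikes_out

# Toy run: 3 neurons, 5 inputs, 20 time steps of random input spikes.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=(3, 5))
v = np.zeros(3)
for _ in range(20):
    v, s = lif_step(v, (rng.random(5) < 0.3).astype(float), w)
```

    The decay factor and firing threshold in this sketch are exactly the kind of hyperparameters the abstract describes SNNs as being sensitive to.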

    MT-SNN: Spiking Neural Network that Enables Single-Tasking of Multiple Tasks

    In this paper we explore the capabilities of spiking neural networks in solving multi-task classification problems using the approach of single-tasking of multiple tasks. We designed and implemented a multi-task spiking neural network (MT-SNN) that can learn two or more classification tasks while performing one task at a time. The task to perform is selected by modulating the firing threshold of the leaky integrate-and-fire neurons used in this work. The network is implemented using Intel's Lava platform for the Loihi2 neuromorphic chip. Tests are performed on dynamic multitask classification for NMNIST data. The results show that MT-SNN effectively learns multiple tasks by modifying its dynamics, namely, the spiking neurons' firing threshold. Comment: 4 pages, 2 figures
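    The task-selection mechanism described in this abstract (modulating the firing threshold of leaky integrate-and-fire neurons) can be sketched as follows. This is not the authors' Lava/Loihi2 implementation; the threshold values, layer size, and input currents are illustrative assumptions.

```python
import numpy as np

# Illustrative task-dependent thresholds; the values actually used in MT-SNN
# are not given here, so these numbers are assumptions.
TASK_THRESHOLDS = {"task_a": 1.0, "task_b": 1.5}

def mt_lif_step(v, current_in, task, decay=0.9, v_reset=0.0):
    """LIF update where the firing threshold is modulated by the selected task."""
    v_th = TASK_THRESHOLDS[task]            # neuromodulation: pick threshold per task
    v = decay * v + current_in              # leak, then integrate
    spikes = (v >= v_th).astype(float)      # same weights, task-dependent firing
    v = np.where(spikes > 0, v_reset, v)
    return v, spikes

# The same input drives different spiking patterns depending on the active task.
current = np.array([0.4, 1.2, 0.8, 1.6])
_, s_a = mt_lif_step(np.zeros(4), current, "task_a")
_, s_b = mt_lif_step(np.zeros(4), current, "task_b")
```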

    Improving spiking neural network performance with auxiliary learning

    The use of the backpropagation-through-time learning rule enabled the supervised training of deep spiking neural networks to process temporal neuromorphic data. However, their performance is still below that of non-spiking neural networks. Previous work pointed out that one of the main causes is the limited amount of neuromorphic data currently available, which is also difficult to generate. With the goal of overcoming this problem, we explore the use of auxiliary learning as a means of helping spiking neural networks to identify more general features. Tests are performed on the neuromorphic DVS-CIFAR10 and DVS128-Gesture datasets. The results indicate that training with auxiliary learning tasks improves their accuracy, albeit slightly. Different scenarios, including manual and automatic combination of losses using implicit differentiation, are explored to analyze the usage of auxiliary tasks.
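    A minimal sketch of the manual loss-combination scenario mentioned above: a shared encoder feeds a primary head and an auxiliary head, and their losses are mixed with a fixed weight. This is a generic (non-spiking) PyTorch illustration of the idea, not the paper's SNN implementation; the layer sizes, auxiliary weight, and dummy data are assumptions.

```python
import torch
import torch.nn as nn

class SharedNet(nn.Module):
    """Shared encoder with a primary head and an auxiliary head."""
    def __init__(self, in_dim=64, hidden=128, n_primary=10, n_aux=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.primary_head = nn.Linear(hidden, n_primary)   # main classification task
        self.aux_head = nn.Linear(hidden, n_aux)           # auxiliary task

    def forward(self, x):
        h = self.encoder(x)
        return self.primary_head(h), self.aux_head(h)

model = SharedNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
aux_weight = 0.3                                           # manual combination weight (assumed)

# Dummy batch standing in for neuromorphic features and labels.
x = torch.randn(32, 64)
y_primary = torch.randint(0, 10, (32,))
y_aux = torch.randint(0, 4, (32,))

logits_p, logits_a = model(x)
loss = criterion(logits_p, y_primary) + aux_weight * criterion(logits_a, y_aux)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

    The automatic scenario mentioned in the abstract would instead treat the combination weight as a quantity to be tuned during training (via implicit differentiation) rather than fixing it by hand as done here.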